9 research outputs found
New Characterizations and Efficient Local Search for General Integer Linear Programming
Integer linear programming (ILP) models a wide range of practical
combinatorial optimization problems and has a significant impact in industry
and management. This work proposes new characterizations of ILP with the
concept of boundary solutions. Motivated by the new characterizations, we
develop an efficient local search solver, which is the first local search
solver for general ILP validated on a large heterogeneous problem dataset. We
propose a new local search framework that switches between three modes, namely
Search, Improve, and Restore modes. We design operators tailored to each mode,
improving the current solution as each situation requires. For the Search and
Restore modes, we propose an operator named tight move, which adaptively
modifies variables' values so as to make some constraint tight. For the
Improve mode, we propose an efficient operator called lift move, which
improves the objective value while maintaining feasibility. Putting these
together, we develop a local search
solver for integer linear programming called Local-ILP. Experiments conducted
on the MIPLIB dataset show the effectiveness of our solver in solving
large-scale hard integer linear programming problems within a reasonably short
time. Local-ILP is competitive and complementary to the state-of-the-art
commercial solver Gurobi and significantly outperforms the state-of-the-art
non-commercial solver SCIP. Moreover, our solver establishes new records for 6
MIPLIB open instances. We also present a theoretical analysis of our
algorithm, showing that it avoids visiting unnecessary regions of the search
space while maintaining good connectivity among targeted solutions.

Comment: 36 pages, 2 figures, 7 tables
Approximation Strategies for Generalized Binary Search in Weighted Trees
We consider the following generalization of the binary search problem. A search strategy is required to locate an unknown target node x in a given tree T. Upon querying a node v of the tree, the strategy receives as a reply an indication of the connected component of T \ {v} containing the target x. The cost of querying each node is given by a known non-negative weight function, and the considered objective is to minimize the total query cost for a worst-case choice of the target. Designing an optimal strategy for a weighted tree search instance is known to be strongly NP-hard, in contrast to the unweighted variant of the problem, which can be solved optimally in linear time. Here, we show that weighted tree search admits a quasi-polynomial time approximation scheme: for any ε > 0, there exists a (1+ε)-approximation strategy with a computation time of n^{O(log n / ε²)}. Thus, the problem is not APX-hard, unless NP ⊆ DTIME(n^{O(log n)}). By applying a generic reduction, we obtain as a corollary that the studied problem admits a polynomial-time O(√(log n))-approximation. This improves previous Ô(log n)-approximation approaches, where the Ô-notation disregards O(poly log log n)-factors.
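The query model above can be made concrete with a small sketch. The strategy below repeatedly queries a centroid-like node, discarding all components that cannot contain the target; this is the classic approach for the unweighted variant (achieving O(log n) queries), not the paper's weighted approximation scheme, and all names are illustrative.

```python
def component(adj, nodes, start, removed):
    """Vertices of the component of `start` in the subtree induced by
    `nodes`, after deleting the queried vertex `removed`."""
    seen = {start}
    stack = [start]
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w in nodes and w != removed and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def pick_centroid(adj, nodes):
    """Node minimizing the size of its largest remaining component
    (O(n^2) scan; fine for a sketch)."""
    best, best_size = None, None
    for v in nodes:
        worst = max((len(component(adj, nodes, n, v))
                     for n in adj[v] if n in nodes), default=0)
        if best is None or worst < best_size:
            best, best_size = v, worst
    return best

def find_target(adj, query):
    """adj: dict node -> set of neighbors; query(v) returns None if v is
    the target, else the neighbor of v whose component contains it."""
    nodes = set(adj)
    while True:
        v = pick_centroid(adj, nodes)
        ans = query(v)
        if ans is None:
            return v
        # keep only the component of T - v containing the target
        nodes = component(adj, nodes, start=ans, removed=v)
```

For example, on the path 0-1-2-3-4 with target 3, the strategy first queries the middle of the tree and then narrows the search to the component the oracle points at.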
Aspects de l'efficacité dans des problÚmes sélectionnés pour des calculs sur les graphes de grande taille
This thesis presents three works on different aspects of efficiency in algorithm design for large-scale graph computations.

In the first work, we consider a setting of classical centralized computing, and we study the question of generalizing modular decompositions and designing a time-efficient algorithm for this problem. Modular decomposition, and more broadly module detection, are ways to reveal and analyze modular properties in structured data. As the classical modular decomposition is well studied and has an optimal linear-time algorithm, we first study generalizations of these concepts to hypergraphs, a little-studied subject that reveals new structure in set families. We present positive results obtained for three definitions of modular decomposition in hypergraphs from the literature. We also consider the generalization that allows errors in classical graph modules and present negative results for two such definitions.

The second work focuses on graph data query scenarios. Here the model differs from classical computing scenarios in that we do not design algorithms to solve the original problem directly; instead, we assume an oracle that provides partial information about its solution, where oracle queries consume time or resources, which we model as costs. We then need an algorithm deciding how to query the oracle efficiently to obtain the exact solution to the original problem; efficiency here refers to the query cost. We study the generalized binary search problem, for which we compute an efficient query strategy to find a hidden target in a graph. We present the results of our work on approximating the optimal strategy of generalized binary search on weighted trees.

Our third work addresses the question of memory efficiency. The setting in which we perform our computations is distributed and memory-restricted. Specifically, every node stores its local data, exchanges data by message passing, and is able to perform local computations. This is similar to the LOCAL/CONGEST model in distributed computing, but our model additionally requires that every node can only store a constant number of variables with respect to its degree. This model can also describe natural algorithms. We implement an existing multiplicative reweighting procedure for approximating the maximum s-t flow problem in this model; this type of methodology may open new opportunities for the field of local or natural algorithms.

From a methodological point of view, the three types of efficiency we studied correspond to the following three scenarios: in the first, the most classical one, given a problem we try to design the most efficient algorithm by hand; in the second, efficiency is treated as an objective, where we model query costs as an objective function and use approximation-algorithm techniques to design an efficient strategy; in the third, efficiency is posed as a memory constraint, and we design an algorithm under this constraint.
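The module concept underlying modular decomposition, mentioned in the first work above, admits a very short illustration: a vertex set M is a module of a graph G when every vertex outside M is adjacent either to all of M or to none of it. This is a hedged sketch of the standard graph-theoretic definition, not the thesis's hypergraph generalizations; names are illustrative.

```python
def is_module(adj, M):
    """Check whether M is a module of the graph given as an adjacency
    dict: every vertex outside M must see all of M or none of M."""
    M = set(M)
    for v in set(adj) - M:
        hits = sum(1 for u in M if u in adj[v])
        if hits not in (0, len(M)):
            return False  # v distinguishes two vertices of M
    return True

# In the diamond graph a-b, a-c, b-c, b-d, c-d, the set {b, c} is a module:
# both a and d are adjacent to all of it. On the path a-b-c-d it is not,
# since a is adjacent to b but not to c.
```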
Efficient and reliable MAC-layer broadcast for IEEE 802.15.4 Wireless Sensor Networks
IEEE 802.15.4 is a widely used MAC-layer standard for Wireless Sensor Networks. In multihop topologies, the protocol exploits a cluster-tree and organizes transmissions by alternating sleeping and active periods in a superframe delimited by beacons. In this paper, we propose a new Contention Broadcast Only Period to limit beacon collisions and to reduce the bandwidth wasted by variable beacon durations. We adopt a CSMA approach during the Contention Broadcast Only Period to efficiently deliver both beacon and broadcast packets. We also propose to use broadcast sequence numbers for reliable MAC-layer broadcast delivery, for both cluster-tree and radio neighbors. Simulations under realistic conditions demonstrate the relevance of this approach: we increase energy savings by reducing idle listening and improve MAC-layer broadcast reliability for both radio and cluster-tree delivery.
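The sequence-number mechanism above boils down to per-source duplicate suppression: a node re-forwards a broadcast only the first time it sees a given (source, sequence number) pair. The sketch below is a toy model of that idea (field names are assumptions, and sequence-number wraparound is ignored for brevity), not the paper's actual protocol logic.

```python
class BroadcastFilter:
    """Per-source duplicate suppression for MAC-layer broadcast."""

    def __init__(self):
        self.last_seen = {}  # source -> highest sequence number accepted

    def accept(self, source, seqno):
        """Return True if this broadcast is new and should be forwarded."""
        last = self.last_seen.get(source)
        if last is not None and seqno <= last:
            return False  # duplicate or stale copy heard from another neighbor
        self.last_seen[source] = seqno
        return True
```

A node hearing the same broadcast from both its cluster-tree parent and a radio neighbor thus forwards it exactly once, which is what makes the delivery reliable without flooding.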
A General Algorithmic Scheme for Modular Decompositions of Hypergraphs and Applications
Distributed Scheduling of Enhanced Beacons for IEEE802.15.4-TSCH Body Area Networks
Best paper award. Body Area Networks (BANs) are expected to exploit IEEE 802.15.4-2015 TSCH, which provides an efficient MAC layer for wireless industrial sensor networks. The standard relies on techniques such as channel hopping and bandwidth reservation to ensure both energy savings and reliable transmissions. With the expected growth of BAN usage, we must now consider dense topologies and interference. In this paper, we propose a rescheduling algorithm to avoid collisions among Enhanced Beacons (EBs): each coordinator is able to adapt its transmission distributively to avoid interference. Indeed, EB losses negatively impact the performance of a BAN. We also jointly optimized the neighbor discovery mechanism, since a multichannel MAC would otherwise increase the discovery delay too much. Our simulations validate the relevance of our discovery and scheduling mechanisms in coping with a very dense deployment of interfering BANs.
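The distributed rescheduling idea can be caricatured in a few lines: a coordinator that detects another beacon in its own TSCH cell (a slot and channel-offset pair) moves to a cell it has not heard any neighbor use. This is a deliberately simplified toy model under assumed names, not the paper's actual algorithm, which must also handle discovery delay.

```python
import random

def reschedule(my_cell, heard_cells, slotframe_len, n_channels, rng=random):
    """Return a beacon cell (slot, channel_offset) not used by any
    neighbor, keeping the current cell when no collision is detected."""
    if my_cell not in heard_cells:
        return my_cell  # no collision detected, keep the current cell
    free = [(s, c) for s in range(slotframe_len) for c in range(n_channels)
            if (s, c) not in heard_cells]
    # pick a free cell at random to avoid repeated collisions between
    # coordinators that reschedule simultaneously; keep the cell if none is free
    return rng.choice(free) if free else my_cell
```

Randomizing the choice of free cell is what lets each coordinator decide locally, without negotiating with the others.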